Current Issue: January - March | Volume: 2020 | Issue Number: 1 | Articles: 5
Software stability means the resistance to the amplification of changes in software. It has become one of the most important attributes that affect maintenance cost. To control the maintenance cost, many approaches have been proposed to measure software stability. However, evaluating software stability remains very difficult, especially when software becomes very large and complex. In this paper, we propose to characterize software stability via change propagation simulation. First, we propose a class coupling network (CCN) to model software structure at the class level. Then, we analyze the change propagation process in the CCN by simulation, and in doing so we develop a novel metric, SS (software stability), to measure software stability. Our SS metric is validated theoretically using the widely accepted Weyuker's properties and empirically using a set of open-source Java software systems. The theoretical results show that our SS metric satisfies most of Weyuker's properties with only two exceptions, and the empirical results show that our metric is an effective indicator for software quality improvement and class importance. Empirical results also show that our approach can be applied to large software systems.
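The change-propagation idea described above can be illustrated in a few lines. The sketch below is a toy Monte Carlo version, not the paper's actual SS metric: the class coupling network is represented as a plain adjacency map, and the propagation probability `p`, the trial count, and the normalization into a [0, 1] score are all assumptions made for this example.

```python
import random

def simulate_propagation(dependents, start, p, rng):
    """One simulated ripple: a change in `start` propagates to each class
    that depends on it, independently with probability `p`."""
    changed = {start}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for dep in dependents.get(node, ()):
            if dep not in changed and rng.random() < p:
                changed.add(dep)
                frontier.append(dep)
    return len(changed)  # number of classes touched by this change

def stability(classes, dependents, p=0.3, trials=500, seed=1):
    """Hypothetical stability score in [0, 1]; 1.0 means changes never ripple."""
    rng = random.Random(seed)
    total = sum(simulate_propagation(dependents, rng.choice(classes), p, rng)
                for _ in range(trials))
    avg_impact = total / trials  # average classes changed per simulated change
    return 1.0 - (avg_impact - 1.0) / max(len(classes) - 1, 1)
```

A system with no coupling scores 1.0, while a tightly coupled chain scores lower; any real metric would need validation of the kind the abstract describes.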
Background: The development of high-throughput sequencing techniques provides us with the possibility to obtain large data sets, which capture the effect of dynamic perturbations on cellular processes. However, because of the dynamic nature of these processes, the analysis of the results is challenging. Therefore, there is a great need for bioinformatics tools that address this problem.
Results: Here we present DynOVis, a network visualization tool that can capture dynamic dose-over-time effects in biological networks. DynOVis is an integrated framework of R packages and JavaScript libraries and offers a force-directed graph network style, involving multiple network analysis methods such as degree threshold, but more importantly, it allows for node expression animations as well as a frame-by-frame view of the dynamic exposure. Valuable biological information can be highlighted on the nodes in the network by the integration of various databases within DynOVis. This information includes pathway-to-gene associations from ConsensusPathDB, disease-to-gene associations from the Comparative Toxicogenomics Database, as well as Entrez gene ID, gene symbol, gene synonyms, and gene type from the NCBI database.
Conclusions: DynOVis can be a useful tool to analyse biological networks which have a dynamic nature. It can visualize the dynamic perturbations in biological networks and allows the user to investigate the changes over time. The integrated data from various online databases make it easy to identify the biological relevance of nodes in the network. With DynOVis we offer a service that is easy to use and does not require any bioinformatics skills to visualize a network.
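One of the network analysis methods mentioned above, degree thresholding, simply hides nodes with few connections so that hubs stand out. A minimal sketch of the idea (the function name and edge-list representation are assumptions for illustration, not DynOVis's API):

```python
from collections import Counter

def degree_threshold(edges, k):
    """Keep only nodes whose degree is at least k, and the edges
    between the kept nodes. (Illustrative helper, not DynOVis code.)"""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    keep = {node for node, d in degree.items() if d >= k}
    return [(u, v) for u, v in edges if u in keep and v in keep]
```

Applied frame by frame to a dose-over-time network, such a filter lets the viewer focus on the most connected genes at each exposure step.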
Background: High-throughput DNA/RNA sequencing has revolutionized biological and clinical research. Sequencing is widely used and generates very large amounts of data, mainly due to reduced cost and advanced technologies. Quickly assessing the quality of giga-to-tera base levels of sequencing data has become a routine but important task. Identification and elimination of low-quality sequence data is crucial for the reliability of downstream analysis results. There is a need for a high-speed tool that uses optimized parallel programming for batch processing and simply gauges the quality of sequencing data from multiple datasets independent of any other processing steps.
Results: FQStat is a stand-alone, platform-independent software tool that assesses the quality of FASTQ files using parallel programming. Based on the machine architecture and input data, FQStat automatically determines the number of cores and the amount of memory to be allocated per file for optimum performance. Our results indicate that in a core-limited case, core assignment overhead exceeds the benefit of additional cores. In a core-unlimited case, performance reaches a saturation point as additional cores are assigned per file. We also show that memory allocation per file has a lower priority in performance when compared to the allocation of cores. FQStat's output is summarized in HTML web page, tab-delimited text file, and high-resolution image formats. FQStat calculates and plots read count, read length, quality score, and high-quality base statistics. FQStat identifies and marks low-quality sequencing data to suggest removal from downstream analysis. We applied FQStat on real sequencing data to optimize performance and to demonstrate its capabilities. We also compared FQStat's performance to similar quality control (QC) tools that utilize parallel programming and attained improvements in run time.
Conclusions: FQStat is a user-friendly tool with a graphical interface that employs a parallel programming architecture and automatically optimizes its performance to generate quality control statistics for sequencing data. Unlike existing tools, these statistics are calculated for multiple datasets and separately at the "lane," "sample," and "experiment" levels to identify subsets of the samples with low quality, thereby preventing the loss of complete samples when reliable data can still be obtained.
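The per-file statistics FQStat reports (read count, read length, base quality) can be sketched for the standard FASTQ layout, where every read occupies four lines and quality is Phred+33 encoded. The snippet below is a simplified illustration of that kind of batch computation, not FQStat's implementation; the one-worker-per-file parallelization via `multiprocessing.Pool` is an assumption for the example.

```python
import os
from multiprocessing import Pool

def phred_scores(qual_line, offset=33):
    """Decode a FASTQ quality string (Phred+33 by default) into scores."""
    return [ord(c) - offset for c in qual_line.strip()]

def file_stats(path):
    """Per-file summary: read count, mean read length, mean base quality."""
    reads = bases = qual_sum = 0
    with open(path) as fh:
        for i, line in enumerate(fh):
            if i % 4 == 1:            # sequence line of the 4-line record
                reads += 1
                bases += len(line.strip())
            elif i % 4 == 3:          # quality line of the 4-line record
                qual_sum += sum(phred_scores(line))
    return {"path": path,
            "reads": reads,
            "mean_len": bases / reads if reads else 0.0,
            "mean_q": qual_sum / bases if bases else 0.0}

def batch_stats(paths, workers=None):
    """Summarize many FASTQ files in parallel, one worker per file."""
    with Pool(workers or min(len(paths), os.cpu_count() or 1)) as pool:
        return pool.map(file_stats, paths)
```

A real QC tool would add per-lane and per-sample aggregation and quality thresholds on top of such per-file summaries.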
The ISO GPS and ASME Y14.5 standards define dimensional and geometrical tolerances as a way to express the limits of part surface variations with respect to nominal model surfaces. A quality-control process using a measuring device verifies the conformity of the parts to these tolerances. To process the control measurement points captured by a device such as a coordinate measuring machine (CMM) or a noncontact scanner, it is necessary to select the appropriate algorithm (e.g., least-squares size and maximum inscribed size) and to include the working hypotheses (e.g., treatment of outliers, noise filtering, and missing data). This means that the operator conducting the analysis must decide which algorithm to use. Through a literature review of current software programs and algorithms, many inaccuracies were found. A benchmark was therefore developed to compare the algorithm performance of three computer-aided inspection (CAI) software programs. From the same point cloud and with the same specifications (requirements and tolerances), the three CAI options were tested on several dimensional and geometrical features.
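To make the algorithm-choice issue concrete, the snippet below shows a textbook algebraic least-squares (Kåsa) circle fit to 2D measurement points, the kind of association algorithm a CAI program applies to a point cloud before evaluating size or form. It is a generic illustration under the Kåsa formulation, not the implementation of any of the benchmarked programs, and it ignores the working hypotheses (outliers, noise filtering) discussed above.

```python
import math

def fit_circle_least_squares(points):
    """Kåsa algebraic least-squares circle fit; returns (cx, cy, r).
    Each point (x, y) ideally satisfies x^2 + y^2 = u*x + v*y + w,
    with center (u/2, v/2) and radius sqrt(w + (u/2)^2 + (v/2)^2)."""
    # Build the 3x3 normal equations for the unknowns (u, v, w).
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for x, y in points:
        row = (x, y, 1.0)
        t = x * x + y * y
        for i in range(3):
            b[i] += row[i] * t
            for j in range(3):
                A[i][j] += row[i] * row[j]
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for j in range(col, 3):
                A[r][j] -= f * A[col][j]
            b[r] -= f * b[col]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (b[r] - sum(A[r][j] * x[j] for j in range(r + 1, 3))) / A[r][r]
    cx, cy = x[0] / 2.0, x[1] / 2.0
    return cx, cy, math.sqrt(x[2] + cx * cx + cy * cy)
```

A maximum-inscribed-size algorithm applied to the same points can return a noticeably different diameter, which is precisely why the benchmark compares the programs on identical point clouds and specifications.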
During the software development process, the decision maker (DM) must master many variables inherent in this process. Software releases represent the order in which a set of requirements is implemented and delivered to the customer. Structuring and enumerating a set of releases with prioritized requirements is a challenging task because each requirement has its own characteristics, such as technical precedence, the cost required for implementation, and the importance that one or more customers attach to the requirement, among other factors. To facilitate this work of selecting and prioritizing releases, the decision maker may adopt support tools. One field of study already known to address this type of problem is Search-Based Software Engineering (SBSE), which uses metaheuristics to find reasonable solutions that take into account a set of well-defined objectives and constraints. In this paper, we seek to expand the possibilities for solving the Next Release Problem using the methods available in Verbal Decision Analysis (VDA). We generate a problem instance and submit it to both the VDA and SBSE methods. To validate this research, we compared the results obtained through VDA with the SBSE results. We present and discuss the results in the respective sections.
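The Next Release Problem mentioned above can be stated compactly: choose the subset of requirements that maximizes customer value under a cost budget while respecting technical precedence. The sketch below is an exhaustive baseline for tiny instances, useful only to make the constraints concrete; it is neither a VDA method nor an SBSE metaheuristic, and the data layout is an assumption for the example.

```python
from itertools import combinations

def next_release(requirements, budget):
    """Pick the subset of requirements maximizing total value under a
    cost budget, honoring technical precedence (a requirement may only
    be selected if all of its prerequisites are selected too).
    `requirements` maps name -> (cost, value, [prerequisite names])."""
    names = list(requirements)
    best, best_value = frozenset(), 0
    for k in range(len(names) + 1):
        for subset in combinations(names, k):
            chosen = set(subset)
            cost = sum(requirements[n][0] for n in chosen)
            if cost > budget:
                continue  # over budget
            if any(p not in chosen for n in chosen for p in requirements[n][2]):
                continue  # precedence violated
            value = sum(requirements[n][1] for n in chosen)
            if value > best_value:
                best, best_value = frozenset(chosen), value
    return best, best_value
```

Because the search space doubles with each requirement, real instances are exactly where metaheuristics (SBSE) or preference elicitation (VDA) take over from brute force.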